Update NIP 11 to support relay recommendations #259
Conversation
But this is already the intent of the normal NIPs!
Technically, yes. In other words, this NIP is focused not just on finding functionality, but on finding state. Picking a random relay that supports a given NIP doesn't imply it has the data you want. A simple example: if you have a relay cluster with one master and multiple followers, the master supports publishing, the followers support requests, and they advertise each other.
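As a rough illustration of that cluster scenario, the relay info documents could point at each other. This is only a sketch: the `recommended_relays` field and the `purpose` values are hypothetical shapes for what this PR proposes, not existing NIP-11 fields.

```typescript
// Hypothetical NIP-11-style info documents for a one-master, two-follower
// cluster. "recommended_relays" is an assumed field name, not part of NIP-11.
interface RelayInfo {
  name: string;
  supported_nips: number[];
  recommended_relays?: { url: string; purpose: "read" | "write" }[];
}

// The master accepts publishes and points readers at its followers.
const master: RelayInfo = {
  name: "cluster-master",
  supported_nips: [1, 11],
  recommended_relays: [
    { url: "wss://follower1.example.com", purpose: "read" },
    { url: "wss://follower2.example.com", purpose: "read" },
  ],
};

// A follower serves requests and points writers back at the master.
const follower: RelayInfo = {
  name: "cluster-follower-1",
  supported_nips: [1, 11],
  recommended_relays: [{ url: "wss://master.example.com", purpose: "write" }],
};
```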
Rather than extensions, there might be reason to begin specifying a second layer over relays. I've been talking with @brugeman about this. Keep relays dumb - IP routers are an apt metaphor. Let aggregation queries be done in another layer (TCP-like) with its own protocol, and clients can pick from "aggregators" like they do relays ... Clients can fall back to "raw" relays and offer a limited feature set if aggregators aren't specified. As more functionality/queries get stuffed into relays, fewer people will implement and run them.
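On the client side, that layering might look something like the following. The `Backend` interface and the fallback rule are purely illustrative assumptions, not an existing protocol.

```typescript
// Illustrative-only sketch of the "second layer" idea: prefer a configured
// aggregator, otherwise fall back to querying raw relays with a reduced
// feature set.
interface Query {
  kinds: number[];
  search?: string; // advanced features only an aggregator would serve
}

interface Backend {
  query(q: Query): Promise<unknown[]>;
}

function pickBackend(aggregators: Backend[], rawRelays: Backend[]): Backend {
  // Degrade gracefully: no aggregator configured means limited functionality
  // served directly by relays.
  return aggregators[0] ?? rawRelays[0];
}
```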
That's exactly the thought process I went through leading up to this NIP. There are basically two options for supporting advanced functionality:

1. Build the functionality into relays themselves.
2. Build second-layer services on top of relays, with their own protocols.
Option #1 violates the "keep relays dumb" rule. The more functionality that gets packed into relays, the less small relays will be able to keep up with increasing system resource requirements, and the more complex relay implementations will discourage smaller relay implementations (à la the Browser Wars).

Option #2 breaks decentralization because 1. it weakens the guarantee of interoperability by creating a new protocol (or many protocols, or ad-hoc specifications, which I think is more likely), and 2. second-layer services don't fit into a homogeneous relay topology. Relay hints, lists, and other places where relays are a first-class concept are incredibly valuable for allowing users to navigate the network and choose where they want to be. A second layer would need to piggyback on the relay recommendation problem somehow, in order to prevent some other mechanism from selecting gatekeepers for the network.

It seems to me the problems associated with option #2 are actually much worse than those of option #1. If we start with the topology of clients/relays as it already exists in the protocol and in practice, we have the hard problem of decentralization solved. Relays can then reference additional functionality, and clients can follow those references to fulfill their requirements. This has the slightly hidden benefit of improving the relevance of service discovery, since it relies on relays to tell clients which service provider will yield the best results for its users. Finding a content search service doesn't help me if it's indexing the wrong content.

Whether these second-layer services are implemented by relays or not is not actually that important. I think they should be, since a consistent API would reduce the complexity of implementation and would keep the protocol coherent. The idea of adding new APIs to the protocol is already built into NIPs.

So, the key takeaways of the approach I'm suggesting:
Here are a few other ways I've considered for solving service discovery:
I think we should talk in more concrete terms here. What do you think of NIP-50? Is that stuffing things into relays? I hope no one is expecting all relays to implement that. Why can't it just fit the normal list of NIPs?
I agree indexers will have centralizing effects; the intent of this NIP is to get ahead of that and articulate add-ons as discoverable through relays, rather than sitting in front of them. All functionality for relays is optional, so all NIPs can be considered add-on services. That's why NIPs specifying sophisticated add-ons should keep the same relay protocol, since there's no fundamental difference between simple functionality like being able to publish to a relay and more advanced stuff like SUGGEST or whatever. So for NIP-50, by all means add it to your relay implementation and advertise support via `supported_nips`. This allows for specially tuned relays to exist and build a business model through specialization, without as much of a risk of centralization.
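To make that concrete, a client might check a relay's NIP-11 info document for NIP-50 support and otherwise follow a recommendation to a specialized relay. The `application/nostr+json` Accept header is what NIP-11 specifies; the `recommended_relays` field and the `"search"` purpose are assumptions about what this proposal could look like, not finalized fields.

```typescript
// Sketch: discover NIP-50 search support from a relay's NIP-11 info document,
// and otherwise follow a (hypothetical) recommendation to a specialized relay.
async function findSearchRelay(relayUrl: string): Promise<string | null> {
  // NIP-11 documents are fetched over HTTP with this Accept header.
  const httpUrl = relayUrl.replace(/^ws/, "http");
  const res = await fetch(httpUrl, {
    headers: { Accept: "application/nostr+json" },
  });
  const info = await res.json();

  if (info.supported_nips?.includes(50)) return relayUrl;

  // "recommended_relays" and the "search" purpose are assumed shapes for
  // this proposal, not existing NIP-11 fields.
  for (const rec of info.recommended_relays ?? []) {
    if (rec.purpose === "search") return rec.url;
  }
  return null;
}
```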
I'll also say that my plan for Coracle, when I reach the stage where I need some additional piece of functionality that doesn't exist on relays and can't be performed client-side, is to create a Coracle-specific indexer (actually my link preview stuff already sort of is that). I don't want to do that in a way that damages the integrity of the network though; adding my service to a relay implementation and letting my client discover it organically would be much preferred.
I moved my original comment to a gist: https://gist.github.com/huumn/5dbc53faad5ccac4bd2e6610ca777e69. I don't deserve to have an opinion here until I build more stuff on nostr. Inexperienced blabbering is unproductive. Apologies for the interjection.
@pablof7z you talked about this problem a bunch in the relays panel yesterday; I'd be interested in your thoughts on this as a way to reduce centralization, encourage data locality, and ask clients to be smarter.
As I said in #522, I think, naive as I am, that all relay discovery efforts are probably worth exploring. As such I don't really have much to add. I think this design pattern is consistent with the way clients currently discover relay functionality. Another pattern that might be worth considering is one that's a little less stateful and therefore more flexible, like redirection: i.e., when a client tries to use a NIP that isn't supported, the relay responds with a NOTICE that tells them to try it on some other relay. The main advantage this would have over the proposal is that it could be dynamic (the relay sees some sub-relay is down, etc.) and the client would have a little less global state to manage.
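A minimal sketch of that redirection idea follows. The `["NOTICE", <message>]` framing is standard NIP-01, but the `redirect:` message convention shown here is purely hypothetical.

```typescript
// Sketch of the redirect-on-NOTICE idea: if a relay answers an unsupported
// request with a NOTICE naming another relay, retry the request there.
function handleRelayMessage(
  msg: [string, ...unknown[]],
  retryOn: (relayUrl: string) => void,
) {
  const [type, payload] = msg;
  if (type === "NOTICE" && typeof payload === "string") {
    // Hypothetical convention: "redirect: wss://search.other-relay.example"
    const match = payload.match(/^redirect:\s*(wss?:\/\/\S+)/);
    if (match) retryOn(match[1]);
  }
}
```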
The intention of this NIP is to create space in the protocol for specifying NIPs for more advanced use cases, without forcing relays either to implement them or to be considered partial implementations.
Some use cases, such as full-text search (NIP-50) or aggregation queries, would be prohibitively expensive for hobby relays.
Without discoverability, clients will hard-code special-purpose relays, creating a centralizing pressure on the network. In addition, some relays might not be picked up by the known/hard-coded add-on services, leaving their data out of calculations.
Recommendations would allow relays to advertise complementary relays to clients. Complementary might mean that it's run by the same people, contains the same data, or is a preferred source for the given functionality. Master/slave configurations could be set up using this scheme too — master relays would not support requests, while slaves would not support publishing events.
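A client consuming such recommendations might route traffic along these lines. The field names follow the hypothetical shape sketched earlier and are not part of NIP-11 today.

```typescript
// Sketch: split reads and writes across a master/slave cluster using the
// hypothetical recommendation fields from above.
type Purpose = "read" | "write";

interface Recommendation {
  url: string;
  purpose: Purpose;
}

function relaysFor(
  purpose: Purpose,
  relayUrl: string,
  recommendations: Recommendation[],
): string[] {
  const matches = recommendations
    .filter((r) => r.purpose === purpose)
    .map((r) => r.url);
  // If the relay made no recommendation for this purpose, just use it directly.
  return matches.length > 0 ? matches : [relayUrl];
}
```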
The downside of this is that it relies on clients being smart and following these references. Someone told me about how dApps used to follow this model, but it slowed things down and developers ended up getting lazy and hardcoding services anyway.